Slack and Margin Rescaling as Convex Extensions of Supermodular Functions

Author

  • Matthew B. Blaschko
Abstract

Slack and margin rescaling are variants of the structured output SVM. They define convex surrogates to task-specific loss functions, which, when specialized to non-additive loss functions for multi-label problems, yield extensions to increasing set functions. We demonstrate in this paper that we may use these concepts to define polynomial-time convex extensions of arbitrary supermodular functions. We further show that slack and margin rescaling can be interpreted as dominating convex extensions over multiplicative and additive families, and that margin rescaling is strictly dominated by slack rescaling. However, we also demonstrate that, while the function value and gradient for margin rescaling can be computed in polynomial time, the same computation for slack rescaling corresponds to a non-supermodular maximization problem.
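In the multi-label setting the abstract refers to, both surrogates can be written as maximizations over subsets of mispredicted labels. The brute-force sketch below illustrates the two constructions under assumed notation (signed per-label margins s_i, a toy increasing supermodular loss ℓ(A) = |A|², constants dropped); it is not the paper's implementation, which would exploit submodular minimization in the margin-rescaled case.

    from itertools import chain, combinations

    p = 4                                   # toy number of labels (assumed)
    V = range(p)

    def ell(A):
        # An increasing supermodular loss with ell(emptyset) = 0;
        # |A|^2 is a standard example of such a function.
        return len(A) ** 2

    def subsets(ground):
        items = list(ground)
        return chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))

    def margin_rescaling(s):
        # Up to constants: max_A ell(A) - sum_{i in A} s_i.
        # The inner objective is supermodular in A (supermodular plus
        # modular), so its maximization is equivalent to submodular
        # minimization and is polynomial time; brute-forced here.
        return max(ell(A) - sum(s[i] for i in A) for A in subsets(V))

    def slack_rescaling(s):
        # Up to constants: max_A ell(A) * (1 - sum_{i in A} s_i).
        # The product of a supermodular and a modular term is not
        # supermodular in general, which is the hardness the
        # abstract points to.
        return max(ell(A) * (1 - sum(s[i] for i in A)) for A in subsets(V))

    margins = [0.8, 0.3, -0.2, 1.5]         # one signed margin per label
    print(margin_rescaling(margins))        # both values are >= 0 and
    print(slack_rescaling(margins))         # convex in the margins

Each surrogate is a maximum of functions affine in the margins, hence convex; the computational difference lies entirely in the inner maximization over subsets.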


Related articles

The Lovász Hinge: A Convex Surrogate for Submodular Losses

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...
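The hinge in question builds on the Lovász extension of a set function, which is computable in O(p log p) by sorting; a minimal sketch under assumed notation (a set function ell with ell(∅) = 0; not the authors' code):

    import numpy as np

    def lovasz_extension(ell, s):
        # Evaluate the Lovasz extension of ell at s in R^p: sort the
        # coordinates of s in decreasing order, then accumulate the
        # discrete marginals of ell along the resulting chain of sets.
        s = np.asarray(s, dtype=float)
        value, prev, chain = 0.0, 0.0, set()
        for i in np.argsort(-s):
            chain.add(int(i))
            cur = ell(frozenset(chain))
            value += s[i] * (cur - prev)    # weight times marginal gain
            prev = cur
        return value

    # The extension agrees with ell on {0,1}^p and is convex exactly
    # when ell is submodular, which is what the Lovasz hinge exploits.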


Learning Submodular Losses with the Lovasz Hinge

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...


Entropy and Margin Maximization for Structured Output Learning

We consider the problem of training discriminative structured output predictors, such as conditional random fields (CRFs) and structured support vector machines (SSVMs). A generalized loss function is introduced, which jointly maximizes the entropy and the margin of the solution. The CRF and SSVM emerge as special cases of our framework. The probabilistic interpretation of large margin methods ...


Minimizing Discrete Convex Functions with Linear Inequality Constraints

A class of discrete convex functions that can efficiently be minimized has been considered by Murota. Among them are L♮-convex functions, which are natural extensions of submodular set functions. We first consider the problem of minimizing an L♮-convex function with a linear inequality constraint having a positive normal vector. We propose a polynomial algorithm to solve it based on a binary se...


Fast and Scalable Structural SVM with Slack Rescaling

We present an efficient method for training slack-rescaled structural SVMs. Although finding the most violating label in a margin-rescaled formulation is often easy, since the target function decomposes with respect to the structure, this is not the case for a slack-rescaled formulation, where finding the most violated label can be very difficult. Our core contribution is an efficient method for f...
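The decomposition contrast can be made concrete on a toy multi-label problem with Hamming loss and linear per-label scores (assumed notation, not the paper's algorithm):

    from itertools import product

    f = [0.4, -1.2, 0.7]                  # per-label scores (assumed)
    y = [1, -1, -1]                       # ground-truth labels

    def score(yhat):
        return sum(fi * yi for fi, yi in zip(f, yhat))

    def delta(yhat):                      # Hamming loss
        return sum(a != b for a, b in zip(yhat, y))

    # Margin rescaling: max_yhat delta(yhat) + score(yhat) - score(y).
    # With Hamming loss the objective is a sum over labels, so each
    # yhat[i] can be chosen independently in O(p).
    mr = [max((1, -1), key=lambda v, fi=fi, yi=yi: (v != yi) + fi * v)
          for fi, yi in zip(f, y)]

    # Slack rescaling: max_yhat delta(yhat) * (1 + score(yhat) - score(y)).
    # The product couples all labels, so there is no per-label
    # decomposition; brute force over all 2^p labelings here.
    sr = max(product((1, -1), repeat=len(y)),
             key=lambda yh: delta(yh) * (1 + score(yh) - score(y)))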




Publication date: 2017